Testing Fisher, Neyman, Pearson, and Bayes

Author

  • Ronald Christensen
Abstract

One of the famous controversies in statistics is the dispute between Fisher and Neyman-Pearson about the proper way to conduct a test. Hubbard and Bayarri (2003) gave an excellent account of the issues involved in the controversy. Another famous controversy is between Fisher and almost all Bayesians. Fisher (1956) discussed one side of these controversies. Berger’s Fisher lecture attempted to create a consensus about testing; see Berger (2003). This article presents a simple example designed to clarify many of the issues in these controversies. Along the way, many of the fundamental ideas of testing from all three perspectives are illustrated. The conclusion is that Fisherian testing is not a competitor to Neyman-Pearson (NP) or Bayesian testing because it examines a different problem. As with Berger and Wolpert (1984), I conclude that Bayesian testing is preferable to NP testing as a procedure for deciding between alternative hypotheses. The example involves data that have four possible outcomes, r = 1, 2, 3, 4. The distribution of the data depends on a parameter θ that takes on values θ = 0, 1, 2. The distributions are defined by their discrete densities f(r|θ), which are given in Table 1. In Section 2, f(r|0) is used to illustrate Fisherian testing. In Section 3, f(r|0) and f(r|2) are used to illustrate testing a simple null hypothesis versus a simple alternative hypothesis, although Subsection 3.1 makes a brief reference to an NP test of f(r|1) versus f(r|2). Section 4 uses all three densities to illustrate testing a simple null versus a composite alternative. Section 5 discusses some issues that do not arise in this simple example. For those who want an explicit statement of the differences between Fisherian and NP testing, one appears at the beginning of Section 6, which also contains other conclusions and comments.
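The three perspectives can be sketched concretely for a discrete family of this form. The Python sketch below is illustrative only: the densities are hypothetical stand-ins (the paper's Table 1 is not reproduced here), and the observed outcome, the size α = 0.05, and the uniform prior over θ are likewise assumptions rather than choices made in the article.

```python
import numpy as np

# Hypothetical discrete densities f(r | theta), r = 1..4; each row sums to 1.
# These numbers are illustrative stand-ins, not the values from the paper's Table 1.
f = {
    0: np.array([0.980, 0.005, 0.005, 0.010]),  # null density f(r | 0)
    1: np.array([0.100, 0.200, 0.200, 0.500]),  # f(r | 1)
    2: np.array([0.098, 0.001, 0.001, 0.900]),  # f(r | 2)
}

r_obs = 4           # assumed observed outcome
idx = r_obs - 1

# Fisherian test: a p-value computed from f(r | 0) alone, with no explicit
# alternative in view; sum the null probabilities of every outcome that is
# at least as improbable under the null as the observed one.
p0 = f[0]
p_value = p0[p0 <= p0[idx]].sum()
print(f"Fisherian p-value under f(r|0): {p_value:.3f}")

# Neyman-Pearson test of f(r | 0) versus f(r | 2): reject for large likelihood
# ratios f(r|2)/f(r|0), adding outcomes in decreasing LR order until the test
# size would exceed alpha.
alpha = 0.05
lr = f[2] / f[0]
order = np.argsort(-lr)                 # outcomes sorted by decreasing LR
size = np.cumsum(p0[order])             # test size as outcomes are added
k = int(np.searchsorted(size, alpha, side="right"))
rejection_region = {int(r) + 1 for r in order[:k]}
print(f"NP rejection region with size <= {alpha}: {sorted(rejection_region)}")
print(f"Reject H0: theta = 0 when r = {r_obs}? {r_obs in rejection_region}")

# Bayesian test: posterior probabilities of theta given r, assuming a uniform
# prior over {0, 1, 2} (another illustrative choice, not the article's).
prior = {t: 1.0 / 3.0 for t in f}
unnorm = {t: prior[t] * f[t][idx] for t in f}
total = sum(unnorm.values())
posterior = {t: v / total for t, v in unnorm.items()}
print("Posterior P(theta | r):", {t: round(v, 3) for t, v in posterior.items()})
```

Because the data are discrete, this nonrandomized NP construction attains a size at or below α rather than exactly α; the article itself, and its Table 1, should be consulted for the actual example.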


Similar articles

Models and Statistical Inference: The Controversy between Fisher and Neyman–Pearson

The main thesis of the paper is that in the case of modern statistics, the differences between the various concepts of models were the key to its formative controversies. The mathematical theory of statistical inference was mainly developed by Ronald A. Fisher, Jerzy Neyman, and Egon S. Pearson. Fisher on the one side and Neyman–Pearson on the other were often involved in polemical controversy...

A comparative introduction to statistical inference and hypothesis testing

These are some notes on a very simple comparative introduction to four basic approaches to statistical inference—Fisher, Neyman–Pearson, Fisher/Neyman–Pearson hybrid, and Bayes—from a course on Quantitative & Statistical Reasoning at OU in Fall 2016. In particular, I hope to give a rough understanding of the differences between the frequentist and Bayesian paradigms, though they are not entirely...

P Values are not Error Probabilities

Confusion surrounding the reporting and interpretation of results of classical statistical tests is widespread among applied researchers. The confusion stems from the fact that most of these researchers are unaware of the historical development of classical statistical testing methods, and the mathematical and philosophical principles underlying them. Moreover, researchers erroneously believe t...

The Widest Cleft in Statistics: How and Why Fisher Opposed Neyman and Pearson

The paper investigates the “widest cleft”, as Savage put it, between frequentists in the foundation of modern statistics: that opposing R.A. Fisher to Jerzy Neyman and Egon Pearson. Apart from deep personal confrontation through their lives, these scientists could not agree on methodology, on definitions, on concepts and on tools. Their premises and their conclusions widely differed and the two...

Microarrays, Empirical Bayes and the Two-Groups Model

The classic frequentist theory of hypothesis testing developed by Neyman, Pearson, and Fisher has a claim to being the Twentieth Century’s most influential piece of applied mathematics. Something new is happening in the Twenty-First Century: high throughput devices, such as microarrays, routinely require simultaneous hypothesis tests for thousands of individual cases, not at all what the classi...


Journal title:

Volume   Issue

Pages   -

Publication date: 2005